

Search for: All records

Creators/Authors contains: "Shahi, Pardeep"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. As the demand for faster and more reliable data processing increases in our daily lives, so does the power consumption of electronics and, correspondingly, of Data Centers (DCs). It has been estimated that about 40% of DC power consumption goes to the cooling systems alone. A responsive and efficient cooling system would not only save energy and space but would also protect electronic devices and help enhance their performance. Although air cooling offers a simple and convenient solution for Electronic Thermal Management (ETM), it lacks the capacity to handle higher heat fluxes. Liquid cooling techniques, on the other hand, have attracted considerable attention due to their potential to overcome the higher thermal loads generated by small chip sizes. In the present work, one of the most commonly used liquid cooling techniques is investigated under various conditions. The performance of liquid-to-liquid heat exchange is studied under multi-level thermal loads. Coolant Supply Temperature (CST) stability and case temperature uniformity on the Thermal Test Vehicles (TTVs) are the target indicators of system performance in this study. The study was carried out experimentally using a rack-mount Coolant Distribution Unit (CDU) attached to primary and secondary cooling loops in a multi-server rack. The effect of various selected control settings on the aforementioned indicators is presented. Results show that the most impactful PID parameter for fluctuation reduction is the integral (reset) coefficient (IC). It is also concluded that fluctuations with amplitudes lower than 1 °C converge into higher amplitudes
    Free, publicly-accessible full text available May 30, 2024
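The abstract above identifies the integral (reset) coefficient as the most impactful PID term for damping coolant supply temperature fluctuation. As a minimal sketch of the control idea, the following simulates a discrete PID loop regulating CST toward a setpoint; the gains, plant model, and temperature values are illustrative assumptions, not parameters from the study.

```python
# Minimal discrete PID sketch for coolant supply temperature (CST) control.
# Gains, plant constants, and temperatures are illustrative assumptions only.

def pid_step(error, state, kp, ki, kd, dt):
    """One PID update; returns (control output, new controller state)."""
    integral = state["integral"] + error * dt          # reset (integral) term
    derivative = (error - state["prev_error"]) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, {"integral": integral, "prev_error": error}

setpoint = 25.0          # target CST in degC (assumed)
temp = 30.0              # initial coolant temperature (assumed)
dt = 1.0                 # control interval, s
state = {"integral": 0.0, "prev_error": setpoint - temp}

for _ in range(120):
    error = setpoint - temp
    u, state = pid_step(error, state, kp=0.8, ki=0.2, kd=0.05, dt=dt)
    # Toy first-order plant: cooling actuation u pulls temp toward setpoint
    # against a constant-ambient disturbance term.
    temp += dt * (0.1 * u - 0.02 * (temp - 22.0))
```

Without the integral term, this loop would settle with a steady-state offset against the disturbance; the reset action is what drives the residual error to zero, consistent with the abstract's observation about the IC.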
  2. In the United States, data center facilities consume about 2% of the total electricity produced, and up to 40% of a facility's energy is used by the cooling infrastructure to cool the heat-generating components inside it. With recent technological advancement, power consumption has trended upward, and a consequence of this increased energy consumption is a larger carbon footprint, which is a growing concern in the industry. In air cooling, the high heat-dissipating components inside a server must receive efficient airflow for effective cooling, and ducting is provided to direct the air toward those components. In this study, the duct in an air-cooled server is optimized: vanes are added to improve the airflow, and side vents are installed on the sides of the server chassis upstream of the duct to bypass some of the cool air entering from the front, where the hard drives are located. Experiments were conducted on a Cisco C220 air-cooled server with the new duct and the bypass, and their effects are quantified by comparing the temperatures of components such as the central processing units (CPUs) and the platform controller hub (PCH), as well as the savings in total fan power consumption. A 7.5 °C drop in temperature is observed, and savings of up to 30% in fan power consumption can be achieved with the improved design compared with the standard server.
    Free, publicly-accessible full text available May 30, 2024
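The ~30% fan power saving reported above is consistent with the standard fan affinity laws, under which fan power scales roughly with the cube of fan speed. The sketch below illustrates that relationship; the specific speed ratio is an illustrative assumption, not a measurement from the study.

```python
# Fan affinity-law sketch: power scales roughly with the cube of fan speed.
# The speed ratio below is an illustrative assumption chosen to show how a
# modest speed reduction yields a ~30% power saving; it is not measured data.

def fan_power_ratio(new_speed, base_speed):
    """Power ratio implied by the cube-law fan affinity relation."""
    return (new_speed / base_speed) ** 3

# If improved ducting lets the fans run at ~89% of their original speed,
# the cube law predicts roughly 30% lower fan power.
ratio = fan_power_ratio(new_speed=0.888, base_speed=1.0)
savings = 1.0 - ratio
```

This cubic sensitivity is why even small airflow improvements from better ducting can translate into large fan power savings.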
  3. Abstract Transistor density trends until recently followed Moore's law, doubling every generation and resulting in increased power density. After the breakdown of Moore's law, computational performance gains were achieved by using multicore processors, leading to nonuniform power distribution and localized high temperatures that make thermal management even more challenging. Cold-plate-based liquid cooling has proven to be one of the most efficient technologies for overcoming these thermal management issues. Traditional liquid-cooled data center deployments provide a constant flow rate to servers irrespective of the workload, leading to excessive coolant pumping power consumption; a further enhancement in the efficiency of liquid cooling in data centers is therefore possible. The present investigation proposes dynamic cooling using an active flow control device to regulate coolant flow rates at the server level. This device can save pumping power by controlling flow rates based on server utilization. The flow control device consists of a V-cut ball valve connected to a microservo motor used to vary the valve angle. The valve position was changed by servomotor actuation through predecided rotational angles to vary the flow rate through the valve. The device operation was characterized by quantifying the flow rates and the pressure drop across the device at different valve positions, using both computational fluid dynamics and experiments. The proposed flow control device was able to vary the flow rate between 0.09 lpm and 4 lpm at different valve positions.
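A control scheme like the one described above needs a mapping from servo valve angle to flow rate so the desired flow can be commanded. The sketch below interpolates over a calibration table; the intermediate angle/flow pairs are illustrative placeholders (only the 0.09 lpm and 4 lpm end points come from the abstract).

```python
# Sketch of the flow-control idea: map servo valve angle to flow rate via a
# characterization table, then pick the angle for a desired flow rate.
# Intermediate calibration points are illustrative placeholders; only the
# reported end points (0.09 lpm minimum, 4 lpm maximum) come from the abstract.
import bisect

# (valve angle in degrees, flow rate in lpm) -- assumed monotone calibration
CALIBRATION = [(0, 0.09), (30, 0.8), (60, 2.0), (90, 4.0)]

def angle_for_flow(target_lpm):
    """Linearly interpolate the valve angle that yields target_lpm."""
    angles = [a for a, _ in CALIBRATION]
    flows = [q for _, q in CALIBRATION]
    target = max(flows[0], min(flows[-1], target_lpm))  # clamp to valid range
    i = bisect.bisect_left(flows, target)
    if i == 0:
        return float(angles[0])
    a0, q0 = CALIBRATION[i - 1]
    a1, q1 = CALIBRATION[i]
    return a0 + (a1 - a0) * (target - q0) / (q1 - q0)

angle = angle_for_flow(2.0)  # angle commanded to deliver 2 lpm in this table
```

In practice the server-utilization signal would set `target_lpm`, and the servo would be driven to the interpolated angle; the real device's angle-flow curve would come from the characterization described in the abstract.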
  4. Abstract Over the last decade, several hyperscale data center companies such as Google, Facebook, and Microsoft have demonstrated the cost-saving capabilities of airside economization with direct/indirect heat exchangers by moving to chiller-less, air-cooled data centers. Under pressure from data center owners, information technology equipment OEMs like Dell and IBM are developing equipment that can withstand peak excursion temperature ratings of up to 45 °C, clearly outside the recommended envelope and into ASHRAE's A4 allowable envelope. As popular and widespread as these cooling technologies are becoming, airside economization comes with challenges: uncontrolled fine particulate and gaseous contaminants, in the presence of temperature and humidity transients, pose a risk of premature hardware failures or reliability degradation. This paper presents an in-depth review of the particulate and gaseous contamination challenges faced by modern data center facilities that use airside economization. The review summarizes specific experimental and computational studies that characterize airborne contaminants and the associated failure modes and mechanisms. In addition, standard lab-based and in-situ test methods for measuring the corrosive effects of particles and corrosive gases under different temperature and relative humidity conditions, as a means of testing equipment robustness against these contaminants, are also reviewed. The paper also outlines cost-sensitive mitigation techniques, such as improved filtration strategies, that can be used for efficient implementation of airside economization.
  5. Abstract The continuous rise in cloud computing and other web-based services propelled the data center proliferation seen over the past decade. Traditional data centers use vapor-compression-based cooling units that not only reduce energy efficiency but also increase operational and initial investment costs due to the redundancies involved. Free air cooling and airside economization can substantially reduce information technology equipment (ITE) cooling power consumption, which accounts for approximately 40% of the energy consumption of a typical air-cooled data center. However, this cooling approach carries an inherent risk of exposing the ITE to harmful ultrafine particulate contaminants, potentially reducing equipment and component reliability. The present investigation attempts to quantify the effects of particulate contamination inside data center equipment and the ITE room using computational fluid dynamics (CFD). The boundary conditions were established through detailed modeling of the ITE and the data center white space. Both two-dimensional and three-dimensional simulations were performed for detailed analysis of particle transport within the server enclosure. The primary pressure-loss obstructions inside the server, such as heat sinks and dual inline memory modules, were analyzed to visualize localized particle concentrations within the server. A room-level simulation was then conducted to identify the most vulnerable locations of particle concentration within the data center space. The results show that higher velocities, heat sink cutouts, and higher-aspect-ratio features within the server tend to increase the particle concentration inside the servers.
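Whether ultrafine particles follow the airflow around server internals or deposit on them is commonly judged by the particle Stokes number (St << 1 means particles track streamlines closely). The back-of-envelope sketch below computes St for a micron-scale particle near a heat-sink feature; all input values are illustrative assumptions, not parameters from the CFD study.

```python
# Back-of-envelope particle Stokes number: St = rho_p * d_p^2 * U / (18 * mu * L)
# for a Stokes-regime particle. All values (particle size and density, air
# viscosity, velocity, feature length scale) are illustrative assumptions.

MU_AIR = 1.8e-5   # dynamic viscosity of air at ~20 degC, Pa*s
RHO_P = 2000.0    # assumed particle density, kg/m^3 (typical mineral dust)

def stokes_number(d_p, velocity, length_scale):
    """Ratio of particle response time to flow time scale past an obstacle."""
    return RHO_P * d_p**2 * velocity / (18.0 * MU_AIR * length_scale)

# 1 um particle in a 3 m/s airflow past a ~2 mm heat-sink fin gap (assumed)
st = stokes_number(d_p=1e-6, velocity=3.0, length_scale=2e-3)
```

For these assumed values St is far below 1, so such particles largely follow the flow, which is why the local velocity field and geometric features (cutouts, high-aspect-ratio passages) dominate where concentrations build up, as the abstract reports.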